Hierarchical Granularity Transfer Learning

Neural Information Processing Systems

In the real world, object categories usually form a hierarchical granularity tree. Most research focuses on recognizing categories at a single granularity, \emph{e.g.,} the basic level or the sub(ordinate) level. Compared with basic-level categories, sub-level categories provide more valuable information, but their training annotations are harder to acquire. An attractive problem is therefore how to transfer knowledge learned from basic-level annotations to sub-level recognition. In this paper, we introduce a new task, named Hierarchical Granularity Transfer Learning (HGTL), whose goal is to recognize sub-level categories given only basic-level annotations and semantic descriptions for the hierarchical categories. Unlike other recognition tasks, HGTL suffers from a serious granularity gap,~\emph{i.e.,} the two granularities share an image space but have different category domains, which impedes knowledge transfer. To this end, we propose a novel Bi-granularity Semantic Preserving Network (BigSPN) to bridge the granularity gap for robust knowledge transfer. Specifically, BigSPN constructs separate visual encoders for the two granularities, which are aligned with a shared semantic interpreter via a novel subordinate entropy loss. Experiments on three benchmarks with hierarchical granularities show that BigSPN is an effective framework for Hierarchical Granularity Transfer Learning.
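The abstract does not spell out the form of the subordinate entropy loss. As a rough illustrative sketch only (our assumption, not the paper's exact formulation), one plausible reading is: score each sub-level class by the similarity between the image embedding and that class's semantic embedding, restrict attention to the sub-level classes that are children of the image's annotated basic-level class, and minimize the entropy of the resulting distribution, so the model is pushed to commit to one sub-level class even though no sub-level labels exist. The function names and the loss form below are hypothetical.

```python
import math


def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]


def subordinate_entropy_loss(sub_scores, children):
    """Hypothetical sketch of a subordinate entropy loss.

    sub_scores: similarity between the image embedding and each
        sub-level semantic embedding (one score per sub-level class).
    children: indices of the sub-level classes that fall under the
        image's annotated basic-level class.
    """
    # Distribution over the children of the labelled basic class only.
    probs = softmax([sub_scores[i] for i in children])
    # Entropy over those children: minimizing it encourages a confident
    # sub-level prediction without sub-level annotations.
    return -sum(p * math.log(p + 1e-12) for p in probs)
```

Under this reading, a confident score pattern among the children yields a lower loss than a uniform one, which is the behavior one would want from an entropy-minimization term; the actual BigSPN loss may differ in its details.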



Review for NeurIPS paper: Hierarchical Granularity Transfer Learning

Neural Information Processing Systems

Summary and Contributions: The paper proposes a new task named Hierarchical Granularity Transfer Learning (HGTL) and a new network architecture called the Bi-granularity Semantic Preserving Network (BigSPN). In HGTL, only basic-level category labels and semantic descriptions of the hierarchical categories are available; the goal is to recognize sub-level categories without any sub-level annotations. In this paper, two levels (basic and subordinate) are considered. Semantic descriptions are typically attributes, keywords, or text descriptions.



R1 and R3 comment that the paper lacks mathematical grounding and novelty. However, R2 and R4 both think that the paper proposes an interesting and useful task that could be adopted by vision researchers. I think the paper should be accepted.

